Trust-region and other regularisations of linear least-squares problems

Authors

  • Coralia Cartis
  • Nicholas I. M. Gould
  • Philippe L. Toint
Abstract

We consider methods for regularising the least-squares solution of the linear system Ax = b. In particular, we propose iterative methods for solving large problems in which a trust-region bound ‖x‖ ≤ ∆ is imposed on the size of the solution, and in which the least value of linear combinations of ‖Ax−b‖₂^q and a regularisation term ‖x‖₂^p, for various p and q = 1, 2, is sought. In each case, one or more “secular” equations are derived, and fast Newton-like solution procedures are suggested. The resulting algorithms are available as part of the GALAHAD optimization library.
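As a concrete illustration of the trust-region case, the following is a minimal sketch, not the GALAHAD implementation, of Newton's method applied to a secular equation for min ‖Ax−b‖₂ subject to ‖x‖₂ ≤ ∆. It uses a dense SVD and assumes A has full column rank; for the large problems targeted by the paper the factorisation would be replaced by iterative machinery.

# Sketch only: trust-region regularised linear least squares via a secular equation.
import numpy as np

def trust_region_lls(A, b, Delta, tol=1e-10, max_iter=50):
    U, sigma, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b                      # coefficients of b in the left singular basis

    def x_of(lam):
        # x(lambda) = V diag(sigma_i / (sigma_i^2 + lambda)) U^T b
        return Vt.T @ (sigma * beta / (sigma**2 + lam))

    # If the unregularised least-squares solution lies inside the trust region, return it.
    x0 = x_of(0.0)
    if np.linalg.norm(x0) <= Delta:
        return x0, 0.0

    # Otherwise solve the secular equation 1/||x(lambda)|| - 1/Delta = 0 by Newton's method;
    # the reciprocal form is nearly linear in lambda, so Newton converges rapidly.
    lam = 0.0
    for _ in range(max_iter):
        r = (sigma * beta) / (sigma**2 + lam)
        s = np.dot(r, r)                              # ||x(lambda)||^2
        phi = 1.0 / np.sqrt(s) - 1.0 / Delta          # secular function
        if abs(phi) <= tol:
            break
        ds = -2.0 * np.sum(r**2 / (sigma**2 + lam))   # d ||x(lambda)||^2 / d lambda
        dphi = -0.5 * ds / s**1.5
        lam = max(lam - phi / dphi, 0.0)
    return x_of(lam), lam

Calling trust_region_lls(A, b, Delta) returns the constrained solution together with the multiplier lambda of the bound, which is zero whenever the bound is inactive.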

Similar articles

Jacobian-Free Three-Level Trust Region Method for Nonlinear Least Squares Problems

Nonlinear least squares (NLS) problems arise in many applications. Common solvers require computing and storing the corresponding Jacobian matrix explicitly, which is too expensive for large problems. In this paper, we propose an effective Jacobian-free method, especially for large NLS problems, based on the novel combination of using automatic differentiation for J(x)v and Jᵀ(x)v along with...
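By way of illustration only, and not the paper's algorithm, automatic differentiation can supply J(x)v by forward mode and Jᵀ(x)w by reverse mode without ever forming the Jacobian; the sketch below uses JAX, and residual() is an arbitrary toy function standing in for the real model residual.

import jax
import jax.numpy as jnp

def residual(x):
    # toy nonlinear residual; in practice this is the model-fitting residual r(x)
    return jnp.array([x[0] ** 2 + x[1] - 1.0, x[0] - x[1] ** 3])

x = jnp.array([0.5, -0.2])
v = jnp.array([1.0, 0.0])
w = jnp.array([0.0, 1.0])

_, Jv = jax.jvp(residual, (x,), (v,))   # forward mode: J(x) v
_, vjp_fn = jax.vjp(residual, x)
(JTw,) = vjp_fn(w)                      # reverse mode: J(x)^T w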


On the convergence of an inexact Gauss-Newton trust-region method for nonlinear least-squares problems with simple bounds

We introduce an inexact Gauss-Newton trust-region method for solving bound-constrained nonlinear least-squares problems where, at each iteration, a trust-region subproblem is approximately solved by the Conjugate Gradient method. Provided a suitable control on the accuracy to which we attempt to solve the subproblems, we prove that the method has global and asymptotic fast convergence properties.
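Trust-region subproblems solved approximately by conjugate gradients, as described above, are commonly handled with the Steihaug-Toint truncated-CG scheme. The sketch below shows that standard scheme (not necessarily the exact variant used in the paper) for minimising gᵀp + ½ pᵀBp subject to ‖p‖ ≤ ∆, with B accessed only through matrix-vector products (B = JᵀJ in the Gauss-Newton setting).

import numpy as np

def steihaug_cg(Bv, g, Delta, tol=1e-8, max_iter=100):
    p = np.zeros_like(g)
    r = g.copy()              # gradient of the quadratic model at p = 0
    d = -r
    for _ in range(max_iter):
        Bd = Bv(d)
        dBd = d @ Bd
        if dBd <= 0.0:        # negative curvature: follow d to the boundary
            return p + _to_boundary(p, d, Delta) * d
        alpha = (r @ r) / dBd
        p_next = p + alpha * d
        if np.linalg.norm(p_next) >= Delta:   # step leaves the region: stop on boundary
            return p + _to_boundary(p, d, Delta) * d
        r_next = r + alpha * Bd
        if np.linalg.norm(r_next) <= tol:
            return p_next
        beta = (r_next @ r_next) / (r @ r)
        d = -r_next + beta * d
        p, r = p_next, r_next
    return p

def _to_boundary(p, d, Delta):
    # positive tau with ||p + tau d|| = Delta
    a, b, c = d @ d, 2.0 * (p @ d), p @ p - Delta**2
    return (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)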


On Iterative Krylov-Dogleg Trust-Region Steps for Solving Neural Networks Nonlinear Least Squares Problems

This paper describes a method of dogleg trust-region steps, or restricted Levenberg-Marquardt steps, based on a projection process onto the Krylov subspaces for neural networks nonlinear least squares problems. In particular, the linear conjugate gradient (CG) method works as the inner iterative algorithm for solving the linearized Gauss-Newton normal equation, whereas the outer nonlinear algor...
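For reference, a minimal version of the classical dogleg construction that underlies such steps (illustrative only, not the paper's Krylov-subspace variant): the step interpolates between the Cauchy point and the Gauss-Newton point and is truncated at the trust-region boundary.

import numpy as np

def dogleg_step(J, r, Delta):
    g = J.T @ r                                    # gradient of 0.5 ||J p + r||^2 at p = 0
    p_gn = np.linalg.lstsq(J, -r, rcond=None)[0]   # Gauss-Newton step
    if np.linalg.norm(p_gn) <= Delta:
        return p_gn
    p_sd = -(g @ g) / np.linalg.norm(J @ g) ** 2 * g   # Cauchy (steepest-descent) step
    if np.linalg.norm(p_sd) >= Delta:
        return Delta * p_sd / np.linalg.norm(p_sd)
    # walk from the Cauchy point toward the Gauss-Newton point until ||p|| = Delta
    d = p_gn - p_sd
    a, b, c = d @ d, 2.0 * (p_sd @ d), p_sd @ p_sd - Delta**2
    tau = (-b + np.sqrt(b * b - 4.0 * a * c)) / (2.0 * a)
    return p_sd + tau * d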


Inexact trust region method for large sparse nonlinear least squares

The main purpose of this paper is to show that linear least squares methods based on bidiagonalization, namely the LSQR algorithm, can be used for generating the trust-region path. This property is the basis for an inexact trust region method which uses the LSQR algorithm for direction determination. This method is very efficient for large sparse nonlinear least squares, as it is supported by numer...
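The property exploited here is that the norms of successive LSQR iterates grow monotonically, so truncating LSQR at increasing iteration counts traces out an approximate trust-region path. A small numerical illustration with random data and scipy.sparse.linalg.lsqr (a sketch, not the paper's method):

import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
A = rng.standard_normal((200, 50))
b = rng.standard_normal(200)

# norms of the LSQR iterates after k iterations; the sequence is non-decreasing,
# so each further iteration moves farther from the origin along the path
norms = [np.linalg.norm(lsqr(A, b, iter_lim=k)[0]) for k in (1, 2, 5, 10, 20)]
print(norms)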


Least squares methods in maximum likelihood problems

It is well known that the Gauss-Newton algorithm for solving nonlinear least squares problems is a special case of the scoring algorithm for maximizing log likelihoods. What has received less attention is that the computation of the current correction in the scoring algorithm in both its line search and trust region forms can be cast as a linear least squares problem. This is an important observ...
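The observation can be made concrete in a few lines: each Gauss-Newton / scoring correction d solves the linear least-squares problem min_d ‖J(x)d + r(x)‖₂, which can be attacked directly instead of through the normal equations. A minimal sketch with illustrative names:

import numpy as np

def gauss_newton_step(J, r):
    # Solve the linearised problem min_d ||J d + r||_2; equivalent to (J^T J) d = -J^T r,
    # but preferable numerically because forming J^T J squares the condition number.
    d, *_ = np.linalg.lstsq(J, -r, rcond=None)
    return d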



Journal:

Volume   Issue

Pages  -

Publication year: 2008